OpenAI: OpenAI announced an update to its most advanced AI language model, GPT-4 Turbo, on Wednesday. The model now has vision capabilities, allowing ChatGPT to analyze images and share its insights with users.
New feature of ChatGPT
This capability is available to developers through the API as well as to the general public through ChatGPT. OpenAI announced GPT-4 Vision in a post from its official account on X (formerly Twitter): “GPT-4 Turbo with Vision is now available in the API. Vision requests can now also use JSON mode and function calling.”
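To illustrate what a vision request with JSON mode might look like, here is a minimal sketch that only builds the request payload in the shape used by OpenAI's Chat Completions API; no network call is made, and the image URL and question are hypothetical placeholders.

```python
import json

def build_vision_request(image_url: str, question: str) -> dict:
    """Build a Chat Completions payload that sends an image alongside text."""
    return {
        "model": "gpt-4-turbo",
        # JSON mode: asks the model to reply with a valid JSON object.
        "response_format": {"type": "json_object"},
        "messages": [
            {
                "role": "user",
                # Vision requests mix text parts and image_url parts
                # in a single user message.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Hypothetical example: ask about a landmark photo.
payload = build_vision_request(
    "https://example.com/taj-mahal.jpg",
    "Describe this landmark and reply as JSON.",
)
print(json.dumps(payload, indent=2))
```

In practice this payload would be sent with an API key via the official OpenAI client or an HTTP POST; the sketch stops at payload construction so it stays self-contained.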
With vision capabilities, GPT-4 Turbo can analyze a picture and describe it in detail for users. The company has shared examples of how the feature works, and brands around the world are already adopting the updated API.
How the vision feature works
Bengaluru-based HealthifyMe is using the updated API with vision capabilities to make tracking macros easier for its customers: users point their camera at their food, and the AI model reports its macros and suggests whether a walk after the meal would be a good idea.
In ChatGPT, the feature is available to Plus subscribers. If you are unfamiliar with it, ChatGPT Plus is a paid service that costs $20 per month. With the new vision feature, users can send any picture to ChatGPT and receive details and insights about it. For example, send a photo of the Taj Mahal and ChatGPT will tell you where it is located, what makes it special, when it was built, what stones were used to build it, and so on.